
 generative adversarial network


Stabilizing Training of Generative Adversarial Networks through Regularization

Neural Information Processing Systems

Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but they require a careful choice of architecture, parameter initialization, and hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.
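The abstract above describes stabilizing GAN training by regularizing the discriminator, which addresses the undefined density ratio when model and data supports do not overlap. As a minimal sketch of the general idea (not the paper's exact regularizer), the snippet below penalizes the squared gradient norm of a toy logistic discriminator at the data points; the discriminator form, names, and the `gamma` weight are illustrative assumptions, and the gradient is written in closed form for this toy model rather than via autodiff.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(x, w, b):
    # Toy discriminator D(x) = sigmoid(w . x + b); x has shape (n, d).
    return sigmoid(x @ w + b)

def grad_norm_penalty(x, w, b, gamma=0.1):
    """Gradient-norm regularizer: (gamma / 2) * mean ||grad_x D(x)||^2.

    For D(x) = sigmoid(w . x + b), the input gradient is
    grad_x D = D * (1 - D) * w, so its squared norm is
    (D * (1 - D))^2 * ||w||^2 (closed form for this toy model).
    """
    d = discriminator(x, w, b)                     # shape (n,)
    sq_norms = (d * (1.0 - d)) ** 2 * np.dot(w, w)
    return 0.5 * gamma * sq_norms.mean()

# In a training loop, this penalty would be added to the discriminator
# loss, discouraging sharp decision boundaries that make the density
# ratio ill-behaved on non-overlapping supports.
```

For example, at a point where D(x) = 0.5 with unit-norm `w`, the squared gradient norm is (0.25)^2 = 0.0625, so with gamma = 0.1 the penalty is 0.003125.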


Mining GOLD Samples for Conditional GANs

Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin

Neural Information Processing Systems

Training GANs (including cGANs), however, is known to be hard and highly unstable [46]. Numerous techniques have thus been proposed to tackle the issue from different angles, e.g., improving architectures [32, 56, 7], losses and regularizers [16, 38, 20], and other training heuristics [46, 51, 8].




Self-Supervised Generative Adversarial Compression

Neural Information Processing Systems

Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks.






Domain Re-Modulation for Few-Shot Generative Domain Adaptation

Yi Wu, Ziqiang Li (University of Science and Technology of China), Chaoyue Wang, Heliang Zheng, Shanshan Zhao (JD Explore Academy), Bin Li

Neural Information Processing Systems

In this study, we delve into the task of few-shot Generative Domain Adaptation (GDA), which involves transferring a pre-trained generator from one domain to a new domain using only a few reference images. Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called Domain Re-Modulation (DoRM).